We consider the problem of personalized news recommendation, where each user consumes news in a sequential fashion. Existing personalized news recommendation methods focus on exploiting user interests and ignore exploration in recommendation, which leads to feedback loops and hurts recommendation quality in the long term. We build on contextual bandit recommendation strategies, which naturally address the exploitation-exploration trade-off. The main challenges are the computational efficiency of exploring the large-scale item space and of utilizing deep representations with uncertainty. We propose a two-stage hierarchical topic-news deep contextual bandit framework to efficiently learn user preferences when there are many news items. We use deep learning representations for users and news, and generalize the neural upper confidence bound (UCB) policies to generalized additive UCB and bilinear UCB. Empirical results on a large-scale news recommendation dataset show that our proposed policies are efficient and outperform the baseline bandit policies.
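The UCB-style exploitation-exploration mechanism underlying this abstract can be sketched with a linear model standing in for the paper's neural policies. This is a minimal illustration (plain LinUCB), not the authors' two-stage framework; the class and parameter names are hypothetical:

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB: a linear stand-in for neural UCB policies."""

    def __init__(self, dim, alpha=1.0):
        self.alpha = alpha    # exploration strength
        self.A = np.eye(dim)  # regularized design matrix
        self.b = np.zeros(dim)  # reward-weighted feature sum

    def score(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b  # ridge-regression estimate
        # mean estimate + uncertainty bonus = the upper confidence bound
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def select(self, items):
        # pick the item with the highest optimistic score
        return max(range(len(items)), key=lambda i: self.score(items[i]))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x
```

The uncertainty bonus shrinks for frequently shown items, so the policy keeps probing under-explored ones instead of locking into a feedback loop.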
Offline policy learning (OPL) leverages existing data collected a priori for policy optimization without any active exploration. Despite the prevalence of and recent interest in this problem, its theoretical and algorithmic foundations in function approximation settings remain under-developed. In this paper, we consider this problem along the axes of distributional shift, optimization, and generalization in offline contextual bandits with neural networks. In particular, we propose a provably efficient offline contextual bandit with neural network function approximation that does not require any functional assumption on the reward. We show that our method provably generalizes over unseen contexts under a milder condition on distributional shift than the existing OPL works. Notably, unlike any other OPL method, our method learns from the offline data in an online manner using stochastic gradient descent, allowing us to bring the benefits of online learning into the offline setting. Moreover, we show that our method is more computationally efficient and has a better dependence on the effective dimension of the neural network than its online counterpart. Finally, we demonstrate the empirical effectiveness of our method in a range of synthetic and real-world OPL problems.
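The "online-style learning from offline data" idea mentioned above can be sketched as plain SGD passes over logged interaction data. A linear reward model is a simplified stand-in for the paper's neural network, and the function and argument names are illustrative only:

```python
import numpy as np

def offline_sgd_fit(logged, dim, lr=0.1, epochs=50):
    """Fit a reward model from logged (features, reward) pairs with
    per-sample SGD updates, i.e., processing offline data as if it
    arrived in an online stream."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, r in logged:
            w -= lr * ((w @ x) - r) * x  # squared-loss gradient step
    return w
```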
Offline reinforcement learning (RL) leverages previously collected data for policy optimization without any further active exploration. Despite the recent interest in this problem, its theoretical results in the setting of neural network function approximation remain limited. In this paper, we study the statistical theory of offline RL with deep ReLU network function approximation. In particular, we establish a sample complexity of $\tilde{\mathcal{O}}\left( \kappa^{1 + d/\alpha} \cdot \epsilon^{-2 - 2d/\alpha} \right)$ for offline RL with deep ReLU networks, where $\kappa$ is a measure of distributional shift, $d$ is the dimension of the state-action space, $\alpha$ is the (possibly fractional) smoothness parameter of the underlying Markov decision process (MDP), and $\epsilon$ is a user-specified error. Notably, our sample complexity holds under two novel considerations: the Besov dynamic closure and the correlated structure that arises from value regression in offline RL. While the Besov dynamic closure generalizes the dynamics conditions of prior works on offline RL, the correlated structure renders prior works on offline RL with general/neural network function approximation improper or inefficient. To the best of our knowledge, this is the first theoretical characterization of the sample complexity of offline RL with deep neural network function approximation under a general Besov regularity condition that goes beyond traditional reproducing kernel Hilbert spaces and neural tangent kernels.
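To make the rate concrete, instantiating the bound at, say, state-action dimension $d = 4$ and smoothness $\alpha = 2$ (illustrative values, not taken from the paper) gives:

```latex
\tilde{\mathcal{O}}\!\left( \kappa^{1 + d/\alpha} \cdot \epsilon^{-2 - 2d/\alpha} \right)
\;=\; \tilde{\mathcal{O}}\!\left( \kappa^{1 + 4/2} \cdot \epsilon^{-2 - 8/2} \right)
\;=\; \tilde{\mathcal{O}}\!\left( \kappa^{3} \, \epsilon^{-6} \right)
```

Smoother MDPs (larger $\alpha$) shrink both exponents, while higher-dimensional state-action spaces (larger $d$) inflate them.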
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefits of INVALIDATOR are three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
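The two invariant-based overfitting conditions described above can be sketched as a simple set check. Here invariants are modeled as plain strings; real invariant inference (e.g., Daikon-style) is assumed to have produced these sets, and all names are hypothetical, not INVALIDATOR's actual API:

```python
def is_overfitting(patched_invs, correct_specs, error_behaviors):
    """A patch overfits if it (1) violates a correct specification or
    (2) preserves an erroneous behavior of the buggy program."""
    violates_spec = any(spec not in patched_invs for spec in correct_specs)
    keeps_error = any(err in patched_invs for err in error_behaviors)
    return violates_spec or keeps_error
```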
We propose a new causal inference framework to learn causal effects from multiple, decentralized data sources in a federated setting. We introduce an adaptive transfer algorithm that learns the similarities among the data sources by utilizing Random Fourier Features to disentangle the loss function into multiple components, each of which is associated with a data source. The data sources may have different distributions; the causal effects are independently and systematically incorporated. The proposed method estimates the similarities among the sources through transfer coefficients, and hence requires no prior information about the similarity measures. The heterogeneous causal effects can be estimated without sharing the raw training data among the sources, thus minimizing the risk of privacy leakage. We also provide minimax lower bounds to assess the quality of the parameters learned from the disparate sources. The proposed method is empirically shown to outperform the baselines on decentralized data sources with dissimilar distributions.
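Random Fourier Features, the ingredient the abstract uses to decompose the loss across sources, admit a compact sketch. This shows only the standard RBF-kernel feature map (Rahimi-Rechtstyle), not the paper's federated transfer coefficients; function and parameter names are illustrative:

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, seed=0):
    """Map X (n_samples, d) to features Z such that Z @ Z.T approximates
    the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # spectral sampling: w ~ N(0, 2*gamma*I) matches the RBF kernel above
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

Because the map is explicit and finite-dimensional, each source can compute its own feature block locally, which is what makes the per-source loss decomposition (and training without sharing raw data) possible.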
In this work, we propose a new approach that combines data from multiple sensors for reliable obstacle avoidance. The sensors include two depth cameras and a LiDAR arranged so that they can capture the whole 3D area in front of the robot and a 2D slice around it. To fuse the data from these sensors, we first use an external camera as a reference to combine data from the two depth cameras. A projection technique is then introduced to convert the 3D point cloud data of the cameras to its 2D correspondence. An obstacle avoidance algorithm is then developed based on the dynamic window approach. A number of experiments have been conducted to evaluate our proposed approach. The results show that the robot can effectively avoid static and dynamic obstacles of different shapes and sizes in different environments.
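The 3D-to-2D projection step described above can be sketched as collapsing a point cloud into a 2D range scan: keep points within a height band and record, per angular bin, the nearest obstacle distance. This is a generic illustration under assumed parameters, not the paper's exact projection technique:

```python
import numpy as np

def pointcloud_to_2d_scan(points, z_min=0.1, z_max=1.5, n_bins=360):
    """Project 3D points (x, y, z) into a 360-bin 2D range scan.
    Bins with no obstacle remain infinite."""
    scan = np.full(n_bins, np.inf)
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue  # outside the robot-relevant height band
        r = np.hypot(x, y)
        angle = np.arctan2(y, x)  # [-pi, pi)
        bin_idx = int((angle + np.pi) / (2 * np.pi) * n_bins) % n_bins
        scan[bin_idx] = min(scan[bin_idx], r)
    return scan
```

The resulting scan has the same shape as a planar LiDAR sweep, so camera and LiDAR data can be fused in one 2D representation before feeding the dynamic window planner.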
Out-of-distribution (OOD) generalisation aims to build a model that can generalise well its learnt knowledge from source domains to an unseen target domain. However, current image classification models often perform poorly in the OOD setting due to statistically spurious correlations learnt during model training. From a causality-based perspective, we formulate the data generation process in OOD image classification using a causal graph. On this graph, we show that the prediction P(Y|X) of a label Y given an image X in statistical learning is formed by both the causal effect P(Y|do(X)) and spurious effects caused by confounding features (e.g., background). Since the spurious features are domain-variant, the prediction P(Y|X) becomes unstable on unseen domains. In this paper, we propose to mitigate the spurious effect of the confounders using front-door adjustment. In our method, the mediator variable is hypothesised as the semantic features that are essential to determine a label for an image. Inspired by the capability of style transfer in image generation, we interpret the combination of the mediator variable with different generated images in the front-door formula and propose novel algorithms to estimate it. Extensive experimental results on widely used benchmark datasets verify the effectiveness of our method.
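The front-door adjustment invoked above has the standard two-step form, with the mediator $M$ playing the role of the semantic features:

```latex
P(Y \mid do(X)) \;=\; \sum_{m} P(m \mid X) \sum_{x'} P(Y \mid x', m)\, P(x')
```

The outer sum marginalizes over the mediator given the observed image, while the inner sum re-pairs each mediator value with independently drawn images $x'$, which is where the style-transferred generated images enter the estimation.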
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradation and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. It also provides for free an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications such as image degradation synthesis, transfer, and interpolation.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Semantic segmentation is an essential task in developing medical image diagnosis systems. However, building an annotated medical dataset is expensive, so semi-supervised methods are important in this setting. In semi-supervised learning, the quality of the labels plays a crucial role in model performance. In this work, we present a new pseudo-labeling strategy that enhances the quality of the pseudo labels used for training the student network. We follow the multi-stage semi-supervised training approach, which trains a teacher model on a labeled dataset and then uses the trained teacher to render pseudo labels for student training. By doing so, the pseudo labels are updated and become more precise as training progresses. The key difference between previous methods and ours is that we update the teacher model during the student training process, so the quality of the pseudo labels improves while the student is being trained. We also propose a simple but effective strategy to enhance the quality of pseudo labels using a momentum model, a slowly updated copy of the original model maintained during training. By applying the momentum model combined with re-rendering of pseudo labels during student training, we achieve an average Dice score of 84.1% on five datasets (Kvasir, CVC-ClinicDB, ETIS-LaribPolypDB, CVC-ColonDB, and CVC-300) with only 20% of the dataset used as labeled data. Our results surpass common practice by 3% and even approach fully supervised results on some datasets. Our source code and pre-trained models are available at https://github.com/sun-asterisk-research/online_learning_ssl
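The momentum model described above is typically realized as an exponential moving average (EMA) of the student's weights. A minimal sketch on plain lists of floats rather than real network parameters (the function name and the momentum value are illustrative):

```python
def ema_update(teacher_params, student_params, momentum=0.99):
    """One EMA step: the teacher is a slow copy that tracks the student,
    smoothing out noisy per-iteration updates and yielding more stable
    pseudo labels."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]
```

Because the teacher changes slowly, re-rendering pseudo labels with it during student training refines the labels gradually instead of letting one bad student update corrupt them.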